Distributed cooperative decision making in multi-agent multi-armed bandits

Authors

Peter Landgren, Vaibhav Srivastava, Naomi Ehrich Leonard

Abstract

We study a distributed decision-making problem in which multiple agents face the same multi-armed bandit (MAB), and each agent makes sequential choices among arms to maximize its own individual reward. The agents cooperate by sharing their estimates over a fixed communication graph. We consider an unconstrained reward model, in which two or more agents can choose the same arm and collect independent rewards, and a constrained reward model, in which agents that choose the same arm at the same time receive no reward. We design a dynamic, consensus-based, distributed estimation algorithm for cooperative estimation of the mean reward at each arm. We leverage the estimates from this algorithm to develop two distributed algorithms, coop-UCB2 and coop-UCB2-selective-learning, for the unconstrained and constrained reward models, respectively. We show that both algorithms achieve group performance close to that of a centralized fusion center. Further, we investigate the influence of the communication graph structure on performance. We propose a novel graph explore-exploit index that predicts the relative performance of groups in terms of the communication graph, and a novel nodal explore-exploit centrality index that predicts the relative performance of agents in terms of their locations in the graph.
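The abstract describes the paper's core mechanism: each agent runs a UCB-style rule on arm-mean estimates that it fuses with its neighbors' estimates via running consensus over a fixed communication graph. Below is a minimal Python sketch of that general idea, not the authors' exact coop-UCB2 algorithm; the ring graph, mixing weights, reward noise level, and exploration constant are illustrative assumptions (coop-UCB2's exploration term additionally involves graph-dependent quantities).

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: M agents face the same K-armed bandit for T rounds.
M, K, T = 4, 5, 2000
true_means = rng.uniform(0.0, 1.0, size=K)
best_arm = int(np.argmax(true_means))

# Fixed communication graph encoded as a doubly stochastic mixing matrix
# (a 4-cycle with Metropolis-style weights; the graph is an assumption).
P = np.array([
    [0.50, 0.25, 0.00, 0.25],
    [0.25, 0.50, 0.25, 0.00],
    [0.00, 0.25, 0.50, 0.25],
    [0.25, 0.00, 0.25, 0.50],
])

# Each agent keeps running-consensus estimates of the group's total
# pull counts (n_hat) and total rewards (s_hat) for every arm.
n_hat = np.full((M, K), 1e-6)   # tiny init avoids division by zero
s_hat = np.zeros((M, K))
best_pulls = np.zeros(M)

for t in range(1, T + 1):
    new_pulls = np.zeros((M, K))
    new_rewards = np.zeros((M, K))
    for i in range(M):
        mu_hat = s_hat[i] / n_hat[i]
        # UCB-style index on the consensus estimates. The constant 2.0
        # is a generic choice, not the graph-dependent term of coop-UCB2.
        index = mu_hat + np.sqrt(2.0 * np.log(t) / n_hat[i])
        a = int(np.argmax(index))
        new_pulls[i, a] = 1.0
        new_rewards[i, a] = true_means[a] + 0.1 * rng.standard_normal()
        best_pulls[i] += a == best_arm
    # Running consensus step: mix neighbors' estimates over the graph,
    # then fold in this round's local observations.
    n_hat = P @ n_hat + new_pulls
    s_hat = P @ s_hat + new_rewards

print("fraction of best-arm pulls per agent:", best_pulls / T)
```

The design point the sketch mirrors is that each agent's n_hat tracks an estimate of the group's total pulls of each arm, so cooperation shrinks every agent's confidence intervals relative to a non-communicating UCB learner.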


Similar Resources

Distributed Exploration in Multi-Armed Bandits

We study exploration in Multi-Armed Bandits in a setting where k players collaborate in order to identify an ε-optimal arm. Our motivation comes from recent employment of bandit algorithms in computationally intensive, large-scale applications. Our results demonstrate a non-trivial tradeoff between the number of arm pulls required by each of the players, and the amount of communication between ...


Coordinated Versus Decentralized Exploration In Multi-Agent Multi-Armed Bandits

In this paper, we introduce a multi-agent multi-armed bandit-based model for ad hoc teamwork with expensive communication. The goal of the team is to maximize the total reward gained from pulling arms of a bandit over a number of epochs. In each epoch, each agent decides whether to pull an arm and hence collect a reward, or to broadcast the reward it obtained in the previous epoch to the team a...


Contextual Multi-Armed Bandits

We study contextual multi-armed bandit problems where the context comes from a metric space and the payoff satisfies a Lipschitz condition with respect to the metric. Abstractly, a contextual multi-armed bandit problem models a situation where, in a sequence of independent trials, an online algorithm chooses, based on a given context (side information), an action from a set of possible actions ...


Staged Multi-armed Bandits

In conventional multi-armed bandits (MAB) and other reinforcement learning methods, the learner sequentially chooses actions and obtains a reward (which may be missing, delayed, or erroneous) after each action taken. This reward is then used by the learner to improve its future decisions. However, in numerous applications, ranging from personalized patient treatment to personalized web-...


Mortal Multi-Armed Bandits

We formulate and study a new variant of the k-armed bandit problem, motivated by e-commerce applications. In our model, arms have a (stochastic) lifetime after which they expire. In this setting, an algorithm needs to continuously explore new arms, in contrast to the standard k-armed bandit model in which arms are available indefinitely and exploration is reduced once an optimal arm is identified ...



Journal

Journal title: Automatica

Year: 2021

ISSN: 0005-1098, 1873-2836

DOI: https://doi.org/10.1016/j.automatica.2020.109445